On Bayes Risk Lower Bounds

Authors

  • Xi Chen
  • Adityanand Guntuboyina
  • Yuchen Zhang
Abstract

This paper provides a general technique for lower bounding the Bayes risk of statistical estimation, applicable to arbitrary loss functions and arbitrary prior distributions. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk but also characterizes the fundamental limit of any estimator given the prior knowledge. Our bounds are based on the notion of f-informativity (Csiszár, 1972), which is a function of the underlying class of probability measures and the prior. Applying our bounds requires upper bounds on the f-informativity, so we derive new upper bounds on f-informativity that often lead to tight Bayes risk lower bounds. Our technique leads to generalizations of a variety of classical minimax bounds (e.g., generalized Fano's inequality). Our Bayes risk lower bounds can be directly applied to several concrete estimation problems, including Gaussian location models, generalized linear models, and principal component analysis for spiked covariance models. To further demonstrate the applications of our Bayes risk lower bounds to machine learning problems, we present two new theoretical results: (1) a precise characterization of the minimax risk of learning spherical Gaussian mixture models under the smoothed analysis framework, and (2) lower bounds on the Bayes risk under a natural prior, for both the prediction and estimation errors, for high-dimensional sparse linear regression in an improper learning setting.
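For context, the f-informativity underlying these bounds has a standard definition in terms of f-divergences; the following is a brief statement of the textbook definitions, not reproduced from the paper itself:

  % f-divergence of P from Q, for a convex f with f(1) = 0:
  \[
    D_f(P \,\|\, Q) = \int f\!\left(\frac{dP}{dQ}\right) dQ.
  \]
  % f-informativity of a prior w over a family {P_theta} (Csiszár, 1972);
  % the infimum runs over all probability measures Q, and the choice
  % f(x) = x log x recovers the mutual information between theta and the data:
  \[
    I_f(w, \{P_\theta\}) = \inf_{Q} \int D_f(P_\theta \,\|\, Q)\, dw(\theta).
  \]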


Similar Articles

Bounds on the Bayes and minimax risk for signal parameter estimation

For estimating the parameter θ from a parametrized signal problem (with 0 ≤ θ ≤ L) observed through Gaussian white noise, four useful and computable lower bounds for the Bayes risk were developed. For problems with different L and different signal-to-noise ratios, some bounds are superior to the others. The lower bound obtained by taking the maximum of the four serves not only as a good lo...

Full text
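One classical, easily computable bound in this spirit is the van Trees (Bayesian Cramér–Rao) inequality, stated here for context; it is not claimed to be one of the four bounds developed in that paper:

  % Van Trees inequality: squared-error loss, prior density pi on [0, L]
  % vanishing at the endpoints, Fisher information I(theta) of the model:
  \[
    \mathbb{E}\big[(\hat{\theta} - \theta)^2\big] \;\ge\;
    \frac{1}{\mathbb{E}_\pi[I(\theta)] + I(\pi)},
    \qquad
    I(\pi) = \int_0^L \frac{\pi'(\theta)^2}{\pi(\theta)}\, d\theta.
  \]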

On an Improvement over Rényi's Equivocation Bound

We consider the problem of estimating the probability of error in multi-hypothesis testing when the MAP criterion is used. This probability, which is also known as the Bayes risk, is an important measure in many communication and information theory problems. In general, the exact Bayes risk can be difficult to obtain. Many upper and lower bounds are known in the literature. One such upper bound is the e...

Full text
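For reference, the quantity being bounded is the error probability of the MAP rule; its exact (usually intractable) form is the standard identity:

  % Bayes risk of MAP multi-hypothesis testing: the MAP rule picks
  % argmax_x p(x | y), so the error probability is
  \[
    P_e \;=\; 1 - \mathbb{E}_Y\Big[\max_{x}\, p_{X \mid Y}(x \mid Y)\Big],
  \]
  % and equivocation-type bounds relate P_e to the conditional entropy H(X | Y).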

Lower Bounds on the Bayes Risk of the Bayesian BTL Model with Applications to Random Graphs

We consider the problem of aggregating pairwise comparisons to obtain a consensus ranking order over a collection of objects. We employ the popular Bradley-Terry-Luce (BTL) model in which each object is associated with a skill parameter which allows us to probabilistically describe the pairwise comparisons between objects. In particular, we employ the Bayesian BTL model which allows for meaning...

Full text
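For context, the BTL model ties the comparison probabilities to the skill parameters as follows; this is the standard parametrization, with the Bayesian variant adding a prior on the skill vector:

  % BTL: object i beats object j with probability determined by
  % the skill parameters theta_i and theta_j:
  \[
    \Pr(i \succ j) \;=\; \frac{e^{\theta_i}}{e^{\theta_i} + e^{\theta_j}}.
  \]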

Lower Bounds on Bayes Factors for Multinomial Distributions, with Application to Chi-squared Tests of Fit

Lower bounds on Bayes factors in favor of the null hypothesis in multinomial tests of point null hypotheses are developed. These are then applied to derive lower bounds on Bayes factors in both exact and asymptotic chi-squared testing situations. The general conclusion is that the lower bounds tend to be substantially larger than P-values, raising serious questions concerning the routine use of...

Full text
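A representative lower bound of this kind, for a point null H_0: theta = theta_0 with data density f(x | theta), is the classical Berger–Sellke-style inequality, stated here for context and not necessarily the paper's exact result:

  % The Bayes factor in favor of H_0 against a prior pi on the alternative
  % is bounded below uniformly over all priors pi:
  \[
    B_\pi(x) \;=\; \frac{f(x \mid \theta_0)}{\int f(x \mid \theta)\, \pi(d\theta)}
    \;\ge\; \frac{f(x \mid \theta_0)}{\sup_\theta f(x \mid \theta)}.
  \]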

Polyshrink: An Adaptive Variable Selection Procedure That Is Competitive with Bayes Experts

We propose an adaptive shrinkage estimator for use in regression problems characterized by many predictors, such as wavelet estimation. Adaptive estimators perform well over a variety of circumstances, such as regression models in which few, some, or many coefficients are zero. Our estimator, PolyShrink, adaptively varies the amount of shrinkage to suit the estimation task. Whereas hard threshold...

Full text
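For contrast with adaptive shrinkage, the fixed thresholding rules alluded to above apply the same operation to every coefficient; these are the standard definitions, for an observed coefficient x and threshold lambda:

  % hard thresholding: keep or kill each coefficient
  \[
    \hat{\theta}_H(x) = x \cdot \mathbf{1}\{|x| > \lambda\},
  \]
  % soft thresholding: shrink every surviving coefficient toward zero
  \[
    \hat{\theta}_S(x) = \operatorname{sign}(x)\,\big(|x| - \lambda\big)_+ .
  \]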


Journal:
  • Journal of Machine Learning Research

Volume 17, Issue -

Pages -

Publication date: 2016